We consider the supervised training setting in which we learn task-specific word embeddings. We assume that we start with initial embeddings learned from unlabelled data and update them to learn task-specific embeddings for words in the supervised training data. However, for new words in the test set, we must use either their initial embeddings or a single unknown embedding, which often leads to errors. We address this by learning a neural network to map from initial embeddings to the task-specific embedding space, via a multi-loss objective function. The technique is general, but here we demonstrate its use for improved dependency parsing (especially for sentences with out-of-vocabulary words), as well as for downstream improvements on sentiment analysis.
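To make the mapping idea concrete, the following is a minimal sketch, not the paper's method: it fits a simple linear map from the initial embedding space to the task-specific space by least squares, then applies that map to an out-of-vocabulary word's initial embedding instead of falling back to a single unknown vector. All data here is synthetic, and the actual approach uses a neural network trained with a multi-loss objective rather than a closed-form linear fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: initial (pretrained) embeddings, and the
# task-specific embeddings learned for in-vocabulary words during
# supervised training. Here they are related by a synthetic linear
# map plus noise, purely for illustration.
d = 8            # embedding dimension
n_vocab = 200    # words seen in supervised training

W_true = rng.normal(size=(d, d))           # unknown "drift" from initial to task space
E_init = rng.normal(size=(n_vocab, d))     # initial embeddings (from unlabelled data)
E_task = E_init @ W_true + 0.01 * rng.normal(size=(n_vocab, d))

# Fit a map from the initial space to the task-specific space using
# the in-vocabulary words as supervision. A least-squares linear map
# is the simplest instance of this idea; the paper instead learns a
# neural network with a multi-loss objective.
W, *_ = np.linalg.lstsq(E_init, E_task, rcond=None)

# At test time, an out-of-vocabulary word has only its initial
# embedding; projecting it into the task-specific space gives a
# better representation than a single shared unknown embedding.
oov_init = rng.normal(size=(d,))
oov_task = oov_init @ W
print(oov_task.shape)  # (8,)
```

The same two-step recipe applies with a nonlinear mapping network: train the map on (initial, task-specific) embedding pairs for in-vocabulary words, then apply it to any word the pretrained embeddings cover.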